# Open Language Model

## Olmo 2 0425 1B Instruct
Apache-2.0
OLMo 2 1B Instruct is a post-trained variant of the allenai/OLMo-2-0425-1B-RLVR1 model, having undergone supervised fine-tuning, DPO training, and RLVR training, and aims for state-of-the-art performance across multiple tasks.
Large Language Model · Transformers · English
allenai · 5,127 · 33
## Olmo 2 0425 1B SFT
Apache-2.0
OLMo 2 1B SFT is a supervised fine-tuned version of the OLMo-2-0425-1B model, trained on the Tulu 3 dataset and designed to achieve state-of-the-art performance across multiple tasks.
Large Language Model · Transformers · English
allenai · 1,759 · 2
## Recurrentgemma 2b
RecurrentGemma is a family of open language models developed by Google on a novel recurrent architecture, available in both pre-trained and instruction-tuned versions suitable for a variety of text generation tasks.
Large Language Model · Transformers
google · 1,941 · 92
## Olmo 7B Instruct
Apache-2.0
OLMo 7B Instruct is an open language model trained on the Dolma dataset and optimized with SFT and DPO, designed specifically for question-answering tasks.
Large Language Model · Transformers · English
allenai · 365 · 53
AIbase
Empowering the Future, Your AI Solution Knowledge Base
© 2025 AIbase